
refactor: add hammercast #113

Closed
amaziahub wants to merge 2 commits from the introduce_hammercast branch

Conversation

amaziahub

Adopting Hammercast for More Expressive Assertions in Tests

Motivation and Logic: In this PR, I have introduced Hammercast into our test suite to improve the readability and expressiveness of our assertions. Hammercast allows us to make assertions in a more declarative style, which not only enhances the clarity of the tests but also provides better error messages when assertions fail.

For example, instead of using traditional assertions like assert len(responses.calls) == 1, we now use assert_that(responses.calls, has_length(1)), which clearly communicates the intention of the test and makes it easier to understand the expected state.

The primary motivation behind this change is to improve the developer experience when reading, writing, and debugging tests. Hammercast's expressive matchers (like has_length, is_, equal_to, and others) provide a more readable way to express conditions, aligning with the Pythonic philosophy of clear and concise code.
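To make the proposed style concrete, here is a minimal sketch of a test written both ways. It assumes matchers equivalent to PyHamcrest's assert_that, has_length, is_, and equal_to (the library discussed later in this thread); the names in the test body are illustrative stand-ins, not taken from the actual suite.

from hamcrest import assert_that, equal_to, has_length, is_

def test_single_call_recorded():
    calls = ["POST /api/flags/"]   # stand-in for something like responses.calls
    status = "ok"                  # stand-in for a field on a response payload

    # Traditional assertions
    assert len(calls) == 1
    assert status == "ok"

    # Matcher-style equivalents proposed in this PR
    assert_that(calls, has_length(1))
    assert_that(status, is_(equal_to("ok")))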

Scope and Changes: The changes in this PR apply Hammercast matchers across all test files. As a result, there are many lines of code affected, but it's important to note that the test logic itself remains unchanged. The tests continue to cover the same functionality and are expected to behave in the same way as before.

Plan: Although this PR touches a substantial number of lines, the changes are limited to updating assertions and do not alter the test logic itself. Here’s my suggestion on how to approach reviewing this PR:

Review focus: concentrate on the assertion changes made with Hammercast. These are mostly stylistic and should not affect the underlying test logic.

Further Iterations: If any tests require additional changes to better leverage Hammercast, I would love to address those in future contributions.

Personal Note: I want to take a moment to express how much I appreciate the work done on this project. I’ve been using it for a while and think it’s a fantastic tool! I’m excited to contribute to its growth.

@amaziahub force-pushed the introduce_hammercast branch from 86759f5 to 15ab429 on December 2, 2024 at 00:11
@matthewelwell
Contributor

Hi @amaziahub, thanks so much for the contribution, and particularly for the great PR description - super helpful.

Unfortunately, we are unable to merge this PR, for a few reasons:

  1. As a team of Python developers, we don't find this any more readable.
  2. Since we manage a number of Python repositories, we want to maintain consistency across all of them. So even if we agreed that this made the tests more readable, implementing it across every test in every repository would be a huge undertaking that we're not willing to invest time in.
  3. The chosen hammercast library is relatively unknown, and since we want to encourage contributions from all developers (and certainly we want to encourage tests!), we don't feel it is something that we could easily add to our guidelines.

Thanks again for the contribution and we hope that this doesn't put you off contributing in the future. Please do consider creating an issue before submitting a PR like this one, however, so that we can discuss and agree on an approach before you invest time in it.

I am going to go ahead and close this PR, but please feel free to respond here, or raise an issue and we can continue the discussion if you have any strong feelings / disagreements with what I've said.

@amaziahub
Author

Hi @matthewelwell,

Thank you for taking the time to review my PR and for your detailed feedback—I truly appreciate it. I completely understand your concerns regarding maintaining consistency across your repositories and the team's preferences for readability.

On Readability

One of the key motivations behind this PR was to align the test suite with Python’s philosophy of emphasizing code readability and clarity. Python’s mission statement, as articulated in PEP 20 (The Zen of Python), highlights that “Readability counts” and that code should be as human-readable as possible. Hamcrest allows us to express assertions in a declarative, almost conversational style. For example:

from hamcrest import assert_that, has_length

# Traditional assertion
assert len(responses.calls) == 1

# Hamcrest equivalent
assert_that(responses.calls, has_length(1))

The latter not only describes what we are asserting (the length of responses.calls should be 1) but also communicates the intent of the test in a way that is easier to grasp, especially for contributors who may not be familiar with the codebase. Additionally, Hamcrest's matchers provide more descriptive error messages when assertions fail, which can significantly speed up debugging.
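For illustration, here is roughly how the two failure modes compare; the wording in the comments below is approximate rather than verbatim, since the exact output depends on the pytest and PyHamcrest versions in use.

from hamcrest import assert_that, has_length

calls = []  # simulate an endpoint that was never hit

try:
    assert_that(calls, has_length(1))
except AssertionError as exc:
    # The matcher names the expectation explicitly, roughly:
    #   Expected: an object with length of <1>
    #        but: was <[]>
    # whereas a bare `assert len(calls) == 1` reports little more
    # than the failed comparison itself.
    print(exc)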

I do, however, respect that readability is subjective and acknowledge that it might not resonate the same way with every team.

On Popularity

Regarding the popularity of PyHamcrest, I wanted to provide some additional context. PyHamcrest is widely used in the Python testing ecosystem (see its usage statistics, for example), making it a well-recognized library for expressive assertions. It's also part of the broader xUnit ecosystem and is frequently adopted in Java and other languages, which demonstrates its versatility and maturity. I understand the preference for simplicity, but I believe leveraging such a library could have long-term benefits, especially as the codebase grows.

On Contribution Guidelines

I completely understand the recommendation to raise an issue before submitting a significant PR like this, as it helps ensure alignment early in the process. However, I noticed that this practice isn’t currently mentioned in the contribution guidelines. Adding this as a recommendation could help clarify expectations for contributors and prevent potential mismatches in the future.

That said, I respect your decision to close this PR and appreciate the time you took to review it. I’m grateful for the opportunity to engage with Flagsmith and hope to contribute again in the future.

Best regards,
Amazia Gur
